coop[w86,jmc]		Logical questions re co-operating systems

One model of AI involves co-operating domain specialists, and
a few years ago someone (I forget who) proposed a time expert.
This made me nervous, because time is so intimately involved
with actions that it seemed unlikely that an action planner
could get by with referring matters of time to a time
specialist.  It seemed rather that the action planner would
have to know a lot about time itself.  However, I couldn't
formulate any precise objection.  Now maybe I can.

	Here is a framework for studying the problem in logic.
Each domain specialist has its own language.  For now we'll take
these to be first order languages, although according to a conjecture
that I'll mention later, this may have to be relaxed.  Communication
between the specialists will have to take place using the predicates
present in both languages, i.e. in the intersection of the two
languages.
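	As a toy illustration of this setup (the representation and
all the names below are my own assumptions, not part of any existing
system), one can model each specialist's language by its set of
predicate symbols and obtain the common language by intersection:

  # A minimal sketch in Python: a first-order language modeled only
  # by its predicate symbols, paired with their arities.  All the
  # predicate names here are hypothetical.
  planner_language = {("Occurs", 2), ("Precond", 2), ("Before", 2)}
  time_language    = {("Before", 2), ("Duration", 2)}

  # Communication must use predicates present in both languages,
  # i.e. the intersection of the two vocabularies.
  common_language = planner_language & time_language
  print(common_language)   # prints {('Before', 2)}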

	Immediately this suggests the relevance of a well-known
logical result --- the Craig interpolation theorem.

Craig Interpolation Theorem --- Let $L1$ and $L2$ be two first order languages,
and suppose the formula  $p1 ⊃ p2$ is provable in the language $L1 ∪ L2$,
where $p1$ and $p2$ are formulas of $L1$ and $L2$ respectively.  Then
there is a formula $p$ of $L1 ∩ L2$ such that $p1 ⊃ p$ and $p ⊃ p2$ are
provable in $L1$ and $L2$ respectively.
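	A small example of my own for illustration: let $L1$ have the
predicates $P$ and $Q$, let $L2$ have $Q$ and $R$, and let the
constant $a$ be common to both.  Take $p1 = P(a) ∧ ∀x.(P(x) ⊃ Q(x))$
and $p2 = Q(a) ∨ R(a)$.  Then $p1 ⊃ p2$ is provable, and $p = Q(a)$,
a formula of $L1 ∩ L2$, serves as the interpolant: $p1 ⊃ p$ is
provable in $L1$ and $p ⊃ p2$ is provable in $L2$.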

	I believe the syntactic proof is such that the interpolant
$p$ it produces may be quite long.

	However, co-operating domain experts seem to require a stronger
result than the Craig interpolation theorem.  Ideally the following
would be true.  Suppose a user of $L1$ with an $L1$-theory $T1$
wishes to achieve a goal in the Prolog sense, i.e. he wishes to
find a tuple $<x1,\ldots,xn>$ such that $P(x1,\ldots,xn)$ holds.
However, he has to consult the user of $L2$, who has a theory $T2$.
To do this he must give the $L2$ user a query $Q(y1,\ldots,yn)$
expressed in the common language $L1 ∩ L2$.  The $L2$ user returns
an answer to this query using his theory $T2$, and then the $L1$
user can answer his original question.  Naturally, the query should
be based entirely on the $L1$ user's own theory $T1$, i.e. it
shouldn't involve any knowledge of $T2$.  Actually the problem may
iterate back and forth in two ways.  First, the $L2$ user may have
to query the $L1$ user in order to answer the query $Q$, and
second, the $L1$ user, after getting an answer to his first query,
may have to ask additional questions.  Anyone who believes this
will work in general is believing in a strong form of the Santa
Claus hypothesis.
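	To make the hoped-for dialog concrete, here is a rough Python
sketch under drastic simplifying assumptions of my own: each
specialist's "theory" is just a set of ground atoms, and a query
asks for all known argument tuples of a single predicate.  All the
names are hypothetical.

  class Specialist:
      def __init__(self, vocabulary, facts):
          self.vocabulary = vocabulary  # predicate symbols it understands
          self.facts = facts            # ground atoms as (pred, args) pairs

      def ask(self, pred, other=None):
          # A query sent to another specialist must lie in the
          # intersection of the two vocabularies.
          if other is not None:
              assert pred in (self.vocabulary & other.vocabulary)
              return other.ask(pred)
          return {args for (p, args) in self.facts if p == pred}

  planner = Specialist({"Occurs", "Before"},
                       {("Occurs", ("load", "t1")),
                        ("Occurs", ("ship", "t2"))})
  timekeeper = Specialist({"Before", "Duration"},
                          {("Before", ("t1", "t2"))})

  # The planner may ask about Before, which both vocabularies share,
  # but not about Duration, which is private to the timekeeper.
  print(planner.ask("Before", other=timekeeper))  # prints {('t1', 't2')}

The iterated version of the problem would simply have the timekeeper
call back with other=planner before answering.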

	There is some hope, however, that something like this
may work in interesting special cases.  Moreover, if we go
by analogy to human co-operating specialists, each side may
have some meta-knowledge about the other.

	The mathematical problem is to formulate some simple
impossibility theorems and find some special cases in which
the dialog method will work.